Development and verification of time-dependent bounding surface model under metro dynamic loads
To study the dynamic characteristics of soft soil foundations under long-term metro dynamic loads, a modified model based on the bounding surface model was presented. The Mesri creep formula was introduced into the bounding surface model, so that the model could not only consider the effects of time but also describe the soil's behaviour at arbitrary shear stress levels. The modified bounding surface model was integrated using the Newton-Raphson method, and a secondary development (user-defined implementation) of the model was conducted. Meanwhile, in order to verify the model, dynamic triaxial tests of the soft soil were conducted with GDS dynamic triaxial equipment, and the metro dynamic loads were simulated during the tests. Then, numerical simulations with the modified bounding surface model were carried out for the soft soil, and the numerical results were compared with the test results. The results show that the time-dependent bounding surface model calculates the dynamic strain more accurately and establishes a theoretical foundation for predicting the settlement of soft soil.
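The abstract mentions that the modified model is integrated with the Newton-Raphson method. As a minimal sketch of that numerical ingredient only (the residual below is a purely illustrative stand-in, not the model's actual return-mapping equations):

```python
# Minimal Newton-Raphson iteration, the scheme typically used for implicit
# integration of constitutive models. The residual function here is a
# HYPOTHETICAL illustration, not the bounding surface model's equations.

def newton_raphson(residual, d_residual, x0, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0 for a scalar unknown x."""
    x = x0
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            return x
        x -= r / d_residual(x)  # Newton step: x_{k+1} = x_k - r / r'
    raise RuntimeError("Newton-Raphson did not converge")

# Illustrative residual: solve x**3 - 2 = 0.
root = newton_raphson(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```

In an actual implementation, the scalar unknown would be replaced by the stress/internal-variable vector and the derivative by the consistent Jacobian.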
Towards Good Practices in Evaluating Transfer Adversarial Attacks
Transfer adversarial attacks raise critical security concerns in real-world,
black-box scenarios. However, the actual progress of this field is difficult to
assess due to two common limitations in existing evaluations. First, different
methods are often not systematically and fairly evaluated in a one-to-one
comparison. Second, only transferability is evaluated but another key attack
property, stealthiness, is largely overlooked. In this work, we design good
practices to address these limitations, and we present the first comprehensive
evaluation of transfer attacks, covering 23 representative attacks against 9
defenses on ImageNet. In particular, we propose to categorize existing attacks
into five categories, which enables our systematic category-wise analyses.
These analyses lead to new findings that even challenge existing knowledge and
also help determine the optimal attack hyperparameters for our attack-wise
comprehensive evaluation. We also pay particular attention to stealthiness, by
adopting diverse imperceptibility metrics and looking into new, finer-grained
characteristics. Overall, our new insights into transferability and
stealthiness lead to actionable good practices for future evaluations.
Comment: An extended version can be found at arXiv:2310.11850. Code and a list
of categorized attacks are available at
https://github.com/ZhengyuZhao/TransferAttackEva
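The core protocol described above (craft adversarial examples on one model, measure how often they fool another) can be sketched on toy linear classifiers; this is an illustrative assumption, not the paper's actual ImageNet pipeline or code:

```python
import numpy as np

# Toy sketch of a transfer-attack evaluation: craft L_inf-bounded adversarial
# examples on a surrogate model, then measure how often they also fool a
# separate target model. Both "models" are hypothetical linear classifiers
# standing in for DNNs.

rng = np.random.default_rng(0)
d, n, eps = 20, 200, 0.5

w_surrogate = rng.normal(size=d)
w_target = w_surrogate + 0.3 * rng.normal(size=d)  # similar but distinct model

X = rng.normal(size=(n, d))
y = np.sign(X @ w_surrogate)  # labels taken from the surrogate's decisions

# FGSM-style step on the surrogate: move against the signed gradient of the
# margin y * (w . x), with the perturbation bounded by eps in L_inf.
X_adv = X - eps * np.sign(y[:, None] * w_surrogate[None, :])

fool_surrogate = np.mean(np.sign(X_adv @ w_surrogate) != y)  # white-box success
fool_target = np.mean(np.sign(X_adv @ w_target) != y)        # transferability
```

A real evaluation would replace the linear models with DNNs and the one-step attack with the categorized iterative attacks studied in the paper.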
Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights
Transferable adversarial examples raise critical security concerns in
real-world, black-box attack scenarios. However, in this work, we identify two
main problems in common evaluation practices: (1) For attack transferability,
lack of systematic, one-to-one attack comparison and fair hyperparameter
settings. (2) For attack stealthiness, simply no comparisons. To address these
problems, we establish new evaluation guidelines by (1) proposing a novel
attack categorization strategy and conducting systematic and fair
intra-category analyses on transferability, and (2) considering diverse
imperceptibility metrics and finer-grained stealthiness characteristics from
the perspective of attack traceback. To this end, we provide the first
large-scale evaluation of transferable adversarial examples on ImageNet,
involving 23 representative attacks against 9 representative defenses. Our
evaluation leads to a number of new insights, including consensus-challenging
ones: (1) Under a fair attack hyperparameter setting, one early attack method,
DI, actually outperforms all the follow-up methods. (2) A state-of-the-art
defense, DiffPure, actually gives a false sense of (white-box) security since
it is indeed largely bypassed by our (black-box) transferable attacks. (3) Even
when all attacks are bounded by the same norm, they lead to dramatically
different stealthiness performance, which negatively correlates with their
transferability performance. Overall, our work demonstrates that existing
problematic evaluations have indeed caused misleading conclusions and missing
points, and as a result, hindered the assessment of the actual progress in this
field.
Comment: Code is available at
https://github.com/ZhengyuZhao/TransferAttackEva
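Insight (3) above, that a shared norm bound does not imply comparable stealthiness, can be illustrated with synthetic perturbations; the two perturbations below are made-up examples, not outputs of the evaluated attacks:

```python
import numpy as np

# Illustrative sketch: two perturbations with the SAME L_inf budget can differ
# widely under other imperceptibility metrics, here L2 and L0 (sparsity).

eps = 8 / 255            # a common L_inf budget on [0, 1] images
shape = (32, 32, 3)      # hypothetical image size
rng = np.random.default_rng(1)

# Dense perturbation: every pixel moved by +-eps.
delta_dense = eps * rng.choice([-1.0, 1.0], size=shape)

# Sparse perturbation: same L_inf bound, but only ~5% of pixels touched.
mask = rng.random(shape) < 0.05
delta_sparse = delta_dense * mask

metrics = {
    name: (np.abs(d).max(), np.linalg.norm(d), np.count_nonzero(d))
    for name, d in [("dense", delta_dense), ("sparse", delta_sparse)]
}
```

Both perturbations saturate the same L_inf bound, yet their L2 and L0 footprints, and hence their visual stealthiness, differ substantially.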
Enhancing Robustness Verification for Deep Neural Networks via Symbolic Propagation
Deep neural networks (DNNs) have been shown to lack robustness, as they are vulnerable to small perturbations of their inputs. This has led to safety concerns about applying DNNs in safety-critical domains. Several verification approaches based on constraint solving have been developed to automatically prove or disprove safety properties of DNNs. However, these approaches suffer from a scalability problem, i.e., only small DNNs can be handled. To deal with this, abstraction-based approaches have been proposed, but they unfortunately face a precision problem, i.e., the obtained bounds are often loose. In this paper, we focus on a variety of local robustness properties and a (δ, ε)-global robustness property of DNNs, and investigate novel strategies to combine the constraint-solving and abstraction-based approaches to work with these properties. We propose a method to verify local robustness, which improves a recent proposal for analyzing DNNs through the classic abstract interpretation technique by a novel symbolic propagation technique. Specifically, the values of neurons are represented symbolically and propagated from the input layer to the output layer, on top of the underlying abstract domains. This achieves significantly higher precision and thus can prove more properties. We also propose a Lipschitz-constant-based verification framework. By utilising Lipschitz constants solved by semidefinite programming, we can prove global robustness of DNNs. We show how the Lipschitz constant can be tightened when it is restricted to small regions; a tightened Lipschitz constant can be helpful in proving local robustness properties. Furthermore, a global Lipschitz constant can be used to accelerate batch local robustness verification, and thus support the verification of global robustness. We show how the proposed abstract interpretation and Lipschitz-constant-based approaches can benefit from each other to obtain more precise results.
Moreover, they can also be exploited and combined to improve the constraint-solving-based approach. We implement our methods in the tool PRODeep and conduct detailed experiments on several benchmarks.
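The baseline that symbolic propagation improves on is abstract interpretation with simple numeric domains; a minimal sketch of the interval domain pushed through one affine layer and a ReLU (the weights are hypothetical, and this omits the symbolic part that tightens these bounds):

```python
import numpy as np

# Sketch of abstract interpretation with the interval domain: input bounds
# are propagated through an affine layer and a ReLU. The network weights
# below are illustrative placeholders.

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b (interval arithmetic)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b  # lower bound: pair signs accordingly
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps bounds elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -1.0])
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # L_inf ball, radius 0.1

lo1, hi1 = affine_interval(lo, hi, W, b)
lo2, hi2 = relu_interval(lo1, hi1)
```

Interval bounds lose the correlations between neurons; symbolic propagation keeps linear expressions of the inputs alive across layers, which is what recovers the precision the abstract mentions.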
Towards Practical Robustness Analysis for DNNs based on PAC-Model Learning
To analyse local robustness properties of deep neural networks (DNNs), we
present a practical framework from a model learning perspective. Based on
black-box model learning with scenario optimisation, we abstract the local
behaviour of a DNN via an affine model with the probably approximately correct
(PAC) guarantee. From the learned model, we can infer the corresponding
PAC-model robustness property. The innovation of our work is the integration of
model learning into PAC robustness analysis: that is, we construct a PAC
guarantee on the model level instead of sample distribution, which induces a
more faithful and accurate robustness evaluation. This is in contrast to
existing statistical methods without model learning. We implement our method in
a prototypical tool named DeepPAC. As a black-box method, DeepPAC is scalable
and efficient, especially when DNNs have complex structures or high-dimensional
inputs. We extensively evaluate DeepPAC, with 4 baselines (using formal
verification, statistical methods, testing and adversarial attack) and 20 DNN
models across 3 datasets, including MNIST, CIFAR-10, and ImageNet. It is shown
that DeepPAC outperforms the state-of-the-art statistical method PROVERO, and
it achieves more practical robustness analysis than the formal verification
tool ERAN. Also, its results are consistent with existing DNN testing work like
DeepGini.
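The central idea, abstracting a black-box function by an affine model fitted on samples and checking the residual on fresh samples, can be sketched as follows; the target function, sample sizes, and check are illustrative assumptions, not DeepPAC's actual scenario-optimisation procedure:

```python
import numpy as np

# Sketch of PAC-model-style analysis: sample inputs around a point, fit an
# affine surrogate of a black-box function by least squares, then bound the
# residual empirically on fresh samples. black_box is a hypothetical stand-in
# for a DNN output, not a real network.

rng = np.random.default_rng(2)

def black_box(x):
    # Mildly nonlinear function standing in for one DNN logit.
    return np.tanh(x @ np.array([0.8, -0.5])) + 0.1 * x[..., 0] ** 2

x0, radius, n_train, n_check = np.zeros(2), 0.1, 200, 500

# Sample the L_inf ball around x0 and query the black box.
X = x0 + radius * rng.uniform(-1, 1, size=(n_train, 2))
y = black_box(X)

# Fit the affine model y ~ A @ x + c via least squares.
design = np.hstack([X, np.ones((n_train, 1))])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# Empirical error margin on fresh samples (scenario-style sanity check).
Xc = x0 + radius * rng.uniform(-1, 1, size=(n_check, 2))
pred = np.hstack([Xc, np.ones((n_check, 1))]) @ coef
margin = np.max(np.abs(pred - black_box(Xc)))
```

In the PAC setting, the number of samples is chosen so the fitted model plus margin holds with a prescribed confidence and error rate; robustness is then inferred from the affine model, which is cheap to analyse exactly.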